Optimal rates for stochastic convex optimization under Tsybakov noise condition
Authors
Abstract
We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer x_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x_{f,S}‖^κ around its minimum, for some κ > 1, then the optimal rate of learning f(x_{f,S}) is Θ(T^{−κ/(2κ−2)}). The classic rate Θ(1/√T) for convex functions and Θ(1/T) for strongly convex functions are special cases of our result for κ → ∞ and κ = 2, and even faster rates are attained for κ < 2. We also derive tight bounds for the complexity of learning x_{f,S}, where the optimal rate is Θ(T^{−1/(2κ−2)}). Interestingly, these precise rates for convex optimization also characterize the complexity of active learning, and our results further strengthen the connections between the two fields, both of which rely on feedback-driven queries.
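As a worked check of the exponents, a minimal sketch is given below; the exact form of the growth condition and the constant λ are illustrative assumptions, since the abstract only states that f grows at least as fast as ‖x − x_{f,S}‖^κ.

\[
  f(x) - f(x_{f,S}) \;\ge\; \lambda \,\lVert x - x_{f,S}\rVert^{\kappa}
  \qquad \text{for all } x \in S,\quad \kappa > 1,\ \lambda > 0,
\]
\[
  \text{rate for } f(x_{f,S}):\ \Theta\!\bigl(T^{-\kappa/(2\kappa-2)}\bigr),
  \qquad
  \text{rate for } x_{f,S}:\ \Theta\!\bigl(T^{-1/(2\kappa-2)}\bigr),
\]
\[
  \kappa = 2:\ \tfrac{\kappa}{2\kappa-2} = 1 \ \Rightarrow\ \Theta(1/T),
  \qquad
  \kappa \to \infty:\ \tfrac{\kappa}{2\kappa-2} \to \tfrac{1}{2} \ \Rightarrow\ \Theta(1/\sqrt{T}),
  \qquad
  \kappa = \tfrac{3}{2}:\ \tfrac{\kappa}{2\kappa-2} = \tfrac{3}{2} \ \Rightarrow\ \Theta(T^{-3/2}).
\]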
Similar resources
Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition
We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer x_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x_{f,S}‖^κ around its...
Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care
We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from...
A Convex Formulation for Mixed Regression with Two Components: Minimax Optimal Rates
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our...
Noise-Adaptive Margin-Based Active Learning and Lower Bounds under Tsybakov Noise Condition
We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing through the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low-noise condition (Mammen and Tsybakov 1999; Tsybakov 2004) the algorithm is able to adapt to the unknown level of noise and achie...